The end-to-end learning of communication systems using autoencoders, consisting of an encoder, channel, and decoder modeled by neural networks, has recently been shown to be a promising approach. A practical challenge in adopting this learning approach is that, under changing channel conditions (e.g., a wireless link), the autoencoder must be frequently retrained to maintain a low decoding error rate. Since retraining is both time-consuming and requires a large number of samples, it becomes impractical when the channel distribution changes rapidly. We propose to address this problem with fast, sample-efficient (few-shot) domain adaptation methods that do not change the encoder and decoder networks. Unlike conventional training-time unsupervised or semi-supervised domain adaptation, here we have a trained autoencoder from a source distribution that we want to adapt (at test time) to a target distribution, using only a small labeled dataset and no unlabeled data. Our method focuses on a Gaussian mixture network based channel model and formulates its adaptation in terms of class- and component-conditional affine transformations. The learned affine transformations are used to design an optimal input transformation at the decoder that compensates for the distribution shift, effectively presenting the decoder with inputs close to the source distribution. We demonstrate the effectiveness of our method at adaptation using very few target-domain samples on a real mmWave FPGA setup, as well as on a number of simulated distribution shifts common to wireless settings.
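The core idea of compensating a distribution shift with an affine map can be illustrated on a single Gaussian component. The sketch below (a simplification: the paper learns such maps per class and per mixture component) computes the affine map sending one Gaussian to another, so decoder inputs from the shifted channel can be mapped back toward the source distribution:

```python
import numpy as np

def gaussian_affine_map(mu_s, cov_s, mu_t, cov_t):
    """Affine map y = A x + b that sends N(mu_t, cov_t) to N(mu_s, cov_s),
    using A = cov_s^{1/2} cov_t^{-1/2} and b = mu_s - A mu_t.
    A toy single-component illustration of compensating a channel
    distribution shift at the decoder input."""
    def msqrt(C):
        # Symmetric positive-definite matrix square root via eigendecomposition.
        w, U = np.linalg.eigh(C)
        return U @ np.diag(np.sqrt(w)) @ U.T
    A = msqrt(cov_s) @ np.linalg.inv(msqrt(cov_t))
    b = mu_s - A @ mu_t
    return A, b

# Hypothetical source (clean) and target (shifted) channel statistics.
mu_s, cov_s = np.zeros(2), np.eye(2)
mu_t = np.array([2.0, -1.0])
cov_t = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A, b = gaussian_affine_map(mu_s, cov_s, mu_t, cov_t)
# A x + b now maps target-channel outputs back to the source statistics.
```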
Monte-Carlo Tree Search (MCTS) is an adversarial search paradigm that first found prominence with its success in the domain of computer Go. Early theoretical work established the game-theoretic soundness and convergence bounds for Upper Confidence bounds applied to Trees (UCT), the most popular instantiation of MCTS; however, there remain notable gaps in our understanding of how UCT behaves in practice. In this work, we address one such gap by considering the question of whether UCT can exhibit lookahead pathology -- a paradoxical phenomenon first observed in Minimax search where greater search effort leads to worse decision-making. We introduce a novel family of synthetic games that offer rich modeling possibilities while remaining amenable to mathematical analysis. Our theoretical and experimental results suggest that UCT is indeed susceptible to pathological behavior in a range of games drawn from this family.
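For readers unfamiliar with UCT, its child-selection rule balances a node's empirical mean value against an exploration bonus. A minimal sketch (not the paper's experimental setup, just the standard rule):

```python
import math

def uct_select(children, c=1.414):
    """Pick the child maximizing the UCT score: empirical mean value plus
    an exploration bonus that shrinks as the child accumulates visits.
    Each child is a dict with 'visits' and 'total_value' fields."""
    parent_visits = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")  # always expand unvisited children first
        exploit = ch["total_value"] / ch["visits"]
        explore = c * math.sqrt(math.log(parent_visits) / ch["visits"])
        return exploit + explore
    return max(children, key=score)

children = [
    {"visits": 10, "total_value": 7.0},  # mean 0.70, well explored
    {"visits": 3,  "total_value": 2.4},  # mean 0.80, less explored
]
best = uct_select(children)  # the less-explored, higher-mean child wins
```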
A fundamental procedure in the analysis of massive datasets is the construction of similarity graphs. Such graphs play a key role for many downstream tasks, including clustering, classification, graph learning, and nearest neighbor search. For these tasks, it is critical to build graphs which are sparse yet still representative of the underlying data. The benefits of sparsity are twofold: firstly, constructing dense graphs is infeasible in practice for large datasets, and secondly, the runtime of downstream tasks is directly influenced by the sparsity of the similarity graph. In this work, we present $\textit{Stars}$: a highly scalable method for building extremely sparse graphs via two-hop spanners, which are graphs where similar points are connected by a path of length at most two. Stars can construct two-hop spanners with significantly fewer similarity comparisons, which are a major bottleneck for learning-based models where comparisons are expensive to evaluate. Theoretically, we demonstrate that Stars builds a graph in nearly-linear time, where approximate nearest neighbors are contained within two-hop neighborhoods. In practice, we have deployed Stars for multiple data sets allowing for graph building at the $\textit{Tera-Scale}$, i.e., for graphs with tens of trillions of edges. We evaluate the performance of Stars for clustering and graph learning, and demonstrate 10- to 1000-fold improvements in pairwise similarity comparisons compared to different baselines, and 2- to 10-fold improvements in running time without quality loss.
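The star construction can be conveyed with a deliberately simplified sketch (hypothetical, not the paper's actual algorithm): sample a few center points and connect every point to its nearest center. Two points sharing a center are then joined by a path of length two, and only about `n * n_centers` similarity comparisons are made instead of roughly `n^2`:

```python
import numpy as np

def star_spanner(points, n_centers=10, rng=None):
    """Toy star-based graph: sample centers, attach each point to its
    nearest center. Points sharing a center are two hops apart."""
    rng = rng or np.random.default_rng(0)
    n = len(points)
    centers = rng.choice(n, size=min(n_centers, n), replace=False)
    edges = []
    for i in range(n):
        # One distance computation per (point, center) pair only.
        d = np.linalg.norm(points[i] - points[centers], axis=1)
        edges.append((i, int(centers[int(np.argmin(d))])))
    return edges, centers

points = np.random.default_rng(1).standard_normal((200, 5))
edges, centers = star_spanner(points, n_centers=8)
# 200 edges and 200 * 8 comparisons, versus ~19,900 edges for a dense graph.
```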
Reliable modeling of conditional densities is important in quantitative sciences such as particle physics. In domains outside physics, implicit quantile neural networks (IQNs) have been shown to provide accurate models of conditional densities. We successfully apply IQNs to jet simulation and correction using tools and simulated data from the Compact Muon Solenoid (CMS) Open Data Portal.
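The training objective underlying quantile networks is the pinball (quantile) loss: minimizing it over sampled quantile levels makes a network approximate the conditional quantile function, and hence the conditional density. A minimal sketch of the loss itself (not the paper's network or CMS pipeline):

```python
import numpy as np

def pinball_loss(y, y_pred, tau):
    """Quantile (pinball) loss. Averaged over tau ~ Uniform(0, 1), this is
    the objective that lets a network f(x, tau) learn conditional quantiles,
    the core idea behind implicit quantile networks."""
    diff = y - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# The constant minimizing the pinball loss at level tau is the tau-quantile:
rng = np.random.default_rng(0)
y = rng.normal(size=100_000)
candidates = np.linspace(-2, 2, 401)
losses = [pinball_loss(y, c, tau=0.9) for c in candidates]
best = candidates[int(np.argmin(losses))]  # close to the N(0,1) 0.9-quantile
```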
Learning optimal behavior from existing data is one of the most important problems in reinforcement learning (RL). This is known as "off-policy control" in RL, where an agent's objective is to compute an optimal policy based on data obtained from a given policy, known as the behavior policy. Because the optimal policy can be very different from the behavior policy, learning optimal behavior is substantially harder than in the "on-policy" setting, where new data from the policy updates is utilized for learning. This work proposes an off-policy natural actor-critic algorithm that utilizes state-action distribution corrections to handle the off-policy behavior, and the natural policy gradient for sample efficiency. Existing natural-gradient-based actor-critic algorithms with convergence guarantees require fixed features for approximating both the policy and value functions, which often leads to suboptimal learning in many RL applications. In contrast, our proposed algorithm utilizes compatible features, which enable one to use arbitrary neural networks to approximate the policy and value functions while guaranteeing convergence to a locally optimal policy. We illustrate the benefits of the proposed off-policy natural-gradient algorithm by comparing it with a vanilla-gradient actor-critic algorithm on benchmark RL tasks.
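The distribution-correction idea can be seen in its simplest form on a bandit: reweight each logged sample by the ratio between the target policy's and the behavior policy's action probabilities. This is only a toy importance-weighted REINFORCE-style sketch; the paper's algorithm additionally corrects the state-action *distribution* and uses the natural gradient with compatible function approximation:

```python
import numpy as np

def off_policy_grad(theta, actions, rewards, behavior_probs):
    """Importance-weighted gradient of expected reward for a softmax bandit
    policy pi_theta, estimated from data logged by a behavior policy."""
    pi = np.exp(theta - theta.max())
    pi = pi / pi.sum()
    grad = np.zeros_like(theta)
    for a, r, b in zip(actions, rewards, behavior_probs):
        rho = pi[a] / b                        # importance ratio pi/beta
        glogpi = -pi.copy()
        glogpi[a] += 1.0                       # grad of log pi(a) for softmax
        grad += rho * r * glogpi
    return grad / len(actions)

# Two-arm bandit: arm 0 pays 1, arm 1 pays 0; uniform behavior policy.
theta = np.zeros(2)
actions = [0, 1] * 50
rewards = [1.0 if a == 0 else 0.0 for a in actions]
behavior_probs = [0.5] * 100
g = off_policy_grad(theta, actions, rewards, behavior_probs)
# The gradient pushes probability toward the rewarding arm 0.
```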
This paper introduces an alternative approach to sampling from autoregressive models. Sampling is typically performed sequentially, one token at a time, following the transition dynamics defined by the model. Instead, we propose a sampling procedure that initializes a sequence with white noise and follows a Markov chain defined by Langevin dynamics on the global log-likelihood of the sequence. This approach parallelizes the sampling process and generalizes to conditional sampling. Using an autoregressive model as a Bayesian prior, we can steer the output of the generative model with conditional likelihoods or constraints. We apply these techniques to autoregressive models in the vision and audio domains, with competitive results in audio source separation, super-resolution, and inpainting.
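The Langevin-dynamics idea can be sketched on a toy target density (not the paper's autoregressive models): start from noise and repeatedly take a gradient step on the log-likelihood plus injected Gaussian noise, updating every coordinate of the sequence in parallel rather than token by token:

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=1e-2, n_steps=5000, rng=None):
    """Unadjusted Langevin dynamics:
        x <- x + step * grad log p(x) + sqrt(2 * step) * noise.
    Run long enough, the iterates approximately sample from p. The whole
    vector x is updated at once, mirroring the parallel alternative to
    sequential ancestral sampling."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2 * step) * noise
    return x

# Toy target: independent standard normals, so grad log p(x) = -x.
samples = langevin_sample(lambda x: -x, x0=np.zeros(2000))
# Each coordinate is now an approximate draw from N(0, 1).
```

Conditioning fits naturally in this scheme: adding the gradient of a conditional log-likelihood or constraint term to `grad_log_p` steers the chain, which is what allows the posterior sampling applications described above.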
We consider the problem of two-player zero-sum games. This problem is formulated as a min-max Markov game in the literature. The solution of this game, the min-max payoff starting from a given state, is called the min-max value of the state. In this work, we compute the solution of two-player zero-sum games utilizing the technique of successive relaxation, which has been successfully applied in the literature to compute faster value iteration algorithms in the context of Markov decision processes. We extend the concept of successive relaxation to the setting of two-player zero-sum games. We show that, under a special structure of the game, this technique facilitates faster computation of the min-max value of a state. We then derive a generalized minimax Q-learning algorithm that computes the optimal policy when the model information is unknown. Finally, we prove the convergence of the proposed generalized minimax Q-learning algorithm utilizing stochastic approximation techniques, under an assumption on the boundedness of the iterates. Through experiments, we demonstrate the effectiveness of our proposed algorithm.
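The baseline that successive relaxation accelerates is plain minimax value iteration. A minimal sketch for games where pure-strategy minimax suffices (a simplification: general simultaneous-move games require solving a matrix game with mixed strategies at each state):

```python
import numpy as np

def minimax_value_iteration(P, R, gamma=0.9, tol=1e-8, max_iter=10_000):
    """Value iteration for a two-player zero-sum Markov game.
    P has shape (A1, A2, S, S): transition matrices for each joint action.
    R has shape (A1, A2, S): rewards to the max player.
    Iterates the minimax Bellman operator
        V(s) = max_{a1} min_{a2} ( R[a1, a2, s] + gamma * (P[a1, a2] V)(s) )."""
    V = np.zeros(P.shape[2])
    for _ in range(max_iter):
        Q = R + gamma * (P @ V)                # shape (A1, A2, S)
        V_new = Q.min(axis=1).max(axis=0)      # min over opponent, max over us
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

# Toy single-state game: self-loop transitions, R is the payoff matrix.
P = np.ones((2, 2, 1, 1))
R = np.array([[[3.0], [1.0]],
              [[2.0], [4.0]]])
V = minimax_value_iteration(P, R, gamma=0.9)
# max_a1 min_a2 R = 2, so the min-max value is 2 / (1 - 0.9) = 20.
```

Generalized minimax Q-learning replaces this model-based operator with a stochastic-approximation update from sampled transitions, which is what allows learning when `P` and `R` are unknown.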